
    Standing together for reproducibility in large-scale computing: report on reproducibility@XSEDE

    This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enable, and support reproducible research; and (2) individual researchers should conduct each experiment as though someone will replicate that experiment. Participants documented numerous issues, questions, technologies, practices, and potentially promising initiatives emerging from the discussion, but also highlighted four areas of particular interest to XSEDE: (1) documentation and training that promotes reproducible research; (2) system-level tools that provide build- and run-time information at the level of the individual job; (3) the need to model best practices in research collaborations involving XSEDE staff; and (4) continued work on gateways and related technologies. In addition, an intriguing question emerged from the day's interactions: would there be value in establishing an annual award for excellence in reproducible research?
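    The second recommendation above, job-level build- and run-time information, is concrete enough to sketch. The snippet below is only a minimal illustration of what such a per-job provenance record might capture, not an XSEDE tool; the field names, output file, and the SLURM and module environment variables are assumptions for the example.

        import json, os, platform, subprocess, time

        def capture_job_provenance(outfile="job_provenance.json"):
            """Record basic build- and run-time information for one job.

            A minimal sketch of the kind of per-job metadata a center might
            collect; field names are illustrative only.
            """
            record = {
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
                "hostname": platform.node(),
                "os": platform.platform(),
                "python": platform.python_version(),
                # Batch-scheduler job id, if present (SLURM shown as an example).
                "job_id": os.environ.get("SLURM_JOB_ID"),
                # Loaded environment modules, if the site exports this variable.
                "loaded_modules": os.environ.get("LOADEDMODULES", "").split(":"),
                # Source version of the user's code, if run from a git checkout.
                "git_commit": None,
            }
            try:
                record["git_commit"] = subprocess.check_output(
                    ["git", "rev-parse", "HEAD"], text=True).strip()
            except (OSError, subprocess.CalledProcessError):
                pass  # not a git repository, or git unavailable
            with open(outfile, "w") as f:
                json.dump(record, f, indent=2)
            return record

        if __name__ == "__main__":
            print(capture_job_provenance())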

    Execution Monitoring with Quantitative Temporal Bayesian Networks

    The goal of execution monitoring is to determine whether a system or person is following a plan appropriately. Monitoring information may be uncertain, and the plan being monitored may have complex temporal constraints. We develop a new framework for reasoning under uncertainty with quantitative temporal constraints -- Quantitative Temporal Bayesian Networks -- and we discuss its application to plan-execution monitoring. QTBNs extend the major previous approaches to temporal reasoning under uncertainty: Time Nets (Kanazawa 1991), Dynamic Bayesian Networks, and Dynamic Object-Oriented Bayesian Networks (Friedman, Koller, & Pfeffer 1998). We argue that Time Nets can model quantitative temporal relationships but cannot easily model the changing values of fluents, while DBNs and DOOBNs naturally model fluents, but not quantitative temporal relationships. Both capabilities are required for execution monitoring, and are supported by QTBNs.
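    To illustrate the fluent-modelling half of this comparison (the part the abstract attributes to DBNs), the sketch below filters a single Boolean fluent through a two-slice dynamic Bayesian network. It is not an implementation of QTBNs and does not model quantitative temporal constraints; the transition and observation probabilities are invented for the example.

        import numpy as np

        # Minimal two-slice DBN sketch: one Boolean fluent (e.g. "door open")
        # tracked over time from noisy observations.

        # P(fluent_t | fluent_{t-1}); rows = previous value (False, True).
        TRANSITION = np.array([[0.9, 0.1],
                               [0.2, 0.8]])

        # P(observation | fluent); rows = fluent value, cols = observed value.
        OBSERVATION = np.array([[0.8, 0.2],
                                [0.3, 0.7]])

        def filter_fluent(observations, prior=np.array([0.5, 0.5])):
            """Return P(fluent_t = True | obs_1..t) after each observation."""
            belief = prior
            history = []
            for obs in observations:
                belief = TRANSITION.T @ belief          # predict one step ahead
                belief = belief * OBSERVATION[:, obs]   # weight by the evidence
                belief = belief / belief.sum()          # renormalise
                history.append(belief[1])
            return history

        if __name__ == "__main__":
            # Observations: 1 = sensor reads "open", 0 = sensor reads "closed".
            print(filter_fluent([1, 1, 0, 1]))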

    Matching 2.5D face scans to 3D models

    The performance of face recognition systems that use two-dimensional images depends on factors such as lighting and subject’s pose. We are developing a face recognition system that utilizes three-dimensional shape information to make the system more robust to arbitrary pose and lighting. For each subject, a 3D face model is constructed by integrating several 2.5D face scans which are captured from different views. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. Two different modalities provided by the facial scan, namely, shape and texture, are utilized and integrated for face matching. The recognition engine consists of two components, surface matching and appearance-based matching. The surface matching component is based on a modified Iterative Closest Point (ICP) algorithm. The candidate list from the gallery used for appearance matching is dynamically generated based on the output of the surface matching component, which reduces the complexity of the appearance-based matching stage. Three-dimensional models in the gallery are used to synthesize new appearance samples with pose and illumination variations and the synthesized face images are used in discriminant subspace analysis. The weighted sum rule is applied to combine the scores given by the two matching components. Experimental results are given for matching a database of 200 3D face models with 598 2.5D independent test scans acquired under different pose and some lighting and expression changes. These results show the feasibility of the proposed matching scheme.
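    The fusion step described here -- a candidate list produced by surface matching, followed by a weighted sum of the two scores -- can be sketched in a few lines. The snippet below only illustrates that combination rule; the weight, score normalisation, and candidate-list size are placeholders, not values from the paper.

        import random

        def fuse_scores(surface_scores, appearance_scores, weight=0.5, top_k=10):
            """Rank gallery subjects by a weighted sum of two matching scores.

            Both score dictionaries map subject_id -> similarity, assumed to be
            normalised to [0, 1] with larger meaning a better match.  Only the
            top_k candidates from surface matching are passed to the appearance
            stage, mirroring the dynamically generated candidate list.
            """
            candidates = sorted(surface_scores, key=surface_scores.get,
                                reverse=True)[:top_k]
            fused = {s: weight * surface_scores[s]
                        + (1.0 - weight) * appearance_scores[s]
                     for s in candidates}
            return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

        if __name__ == "__main__":
            surface = {"subject_%d" % i: random.random() for i in range(200)}
            appearance = {k: random.random() for k in surface}
            print(fuse_scores(surface, appearance)[:3])  # three best matches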

    Three-Dimensional Model Based Face Recognition

    The performance of face recognition systems that use two-dimensional (2D) images is dependent on consistent conditions such as lighting, pose and facial expression. We are developing a multi-view face recognition system that utilizes three-dimensional (3D) information about the face to make the system more robust to these variations. This paper describes a procedure for constructing a database of 3D face models and matching this database to 2.5D face scans which are captured from different views, using coordinate system invariant properties of the facial surface. 2.5D is a simplified 3D (x, y, z) surface representation that contains at most one depth value (z direction) for every point in the (x, y) plane. A robust similarity metric is defined for matching, based on an Iterative Closest Point (ICP) registration process. Results are given for matching a database of 18 3D face models with 113 2.5D face scans.
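    The ICP registration at the core of this similarity metric alternates between nearest-neighbour correspondence and a closed-form rigid alignment. The sketch below is a plain point-to-point ICP returning an RMS residual as the match score, assuming numpy and scipy are available; it does not reproduce the paper's modified, coordinate-system-invariant variant, and the iteration count is arbitrary.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, dst_c - R @ src_c

        def icp_distance(scan, model, iterations=20):
            """Align a 2.5D scan (N x 3) to a 3D model (M x 3); return RMS residual.

            A smaller residual means a better surface match, so the gallery model
            with the lowest value would be reported as the identity.
            """
            tree = cKDTree(model)
            src = scan.copy()
            for _ in range(iterations):
                dist, idx = tree.query(src)               # closest model points
                R, t = best_rigid_transform(src, model[idx])
                src = src @ R.T + t                       # apply rigid update
            dist, _ = tree.query(src)
            return float(np.sqrt(np.mean(dist ** 2)))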